Elasticnet

\[ Y = f(X) \]

\[ y = f(x) = a + bx \]

For xgboost you do need dummy variables.

Trees figure out the bins for you.

Think of the predictor space as a room; a simple model just takes an average. Dividing the population into buckets –> how do people define the buckets?

Decision tree –> bucket splits.

$$ f(x) = \sum_{m=1}^{M} c_m I(x \in R_m) $$

The "decision" in a decision tree is making the splits.

CART –> Classification And Regression Trees.

Breaking the data into M regions. Within each region, the function becomes the average of that region; a point is either in a region or it is not.
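The region-average idea can be sketched in a few lines of base R. This is a hypothetical 1-D example where the split point is chosen by hand; a real tree would learn it from the data:

```r
# Split x at 5 into two regions and predict with the region average,
# the same idea a regression tree uses.
set.seed(1)
x <- runif(100, 0, 10)
y <- ifelse(x < 5, 2, 8) + rnorm(100, sd = 0.5)

region <- ifelse(x < 5, "R1", "R2")   # which region each point falls in
c_m <- tapply(y, region, mean)        # the per-region averages
pred <- unname(c_m[region])           # f(x) is the average of x's region
```

The fitted function is a step function: every point in the same region gets the same prediction.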

AI: 95% of the time it is logistic regression or an if/else statement.

library(coefplot)
Loading required package: ggplot2
library(xgboost)
library(magrittr)
library(dygraphs)
library(useful)
land_train <- readRDS('data/manhattan_Train.rds')
land_test <- readRDS('data/manhattan_Test.rds')
land_val <- readRDS('data/manhattan_Validate.rds')

XGBoost was written by computer engineers, so it wants the response as an integer (0/1), not a factor.

set.seed(1123)
table(land_train$HistoricDistrict)

   No   Yes 
23791  8459 
histFormula <- HistoricDistrict ~ FireService + 
  ZoneDist1 + ZoneDist2 + Class + LandUse + 
  OwnerType + LotArea + BldgArea + ComArea + 
  ResArea + OfficeArea + RetailArea + 
  GarageArea + FactryArea + NumBldgs + 
  NumFloors + UnitsRes + UnitsTotal + 
  LotFront + LotDepth + BldgFront + 
  BldgDepth + LotType + Landmark + BuiltFAR +
  Built + TotalValue - 1 # subtracting the intercept since a tree does not need an intercept
landX_train <- build.x(histFormula, data=land_train, contrasts=FALSE, sparse=TRUE)
landY_train <- build.y(histFormula, data=land_train) %>% 
  as.integer() - 1
landX_test <- build.x(histFormula, data=land_test, contrasts=FALSE, sparse=TRUE)
landY_test <- build.y(histFormula, data=land_test) %>% 
  as.integer() - 1
landX_val <- build.x(histFormula, data=land_val, contrasts=FALSE, sparse=TRUE)
landY_val <- build.y(histFormula, data=land_val) %>% 
  as.integer() - 1
landY_train
   [1] 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1
  [49] 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
  [97] 1 1 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1
 [145] 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1
 [193] 1 1 1 1 0 1 1 1 0 1 0 0 1 1 1 0 0 0 0 1 1 1 1 0 1 1 1 1 1 0 0 0 1 1 1 1 1 0 0 0 0 0 1 0 0 0 0 0
 [241] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [289] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0
 [337] 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0
 [385] 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
 [433] 1 1 1 1 1 1 1 1 1 1 1 0 0 0 1 1 1 0 0 1 1 1 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [481] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [529] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1 1 1 1 1 1 0 0
 [577] 0 0 0 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1 1 1 0 0 0 0 0 0 0 0 0 0
 [625] 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0
 [673] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [721] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [769] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [817] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [865] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [913] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [961] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [ reached getOption("max.print") -- omitted 31250 entries ]
# the response has to be a 0/1 integer: as.integer() turns the factor into 1/2,
# and subtracting 1 shifts it to 0/1

XGBoost does validation, but does not do it cleanly. The third dataset is used for ….

XGBoost wants things a certain way, as xgb.DMatrix objects. An xgb.DMatrix is like a list holding the data and its labels.

xgTrain <- xgb.DMatrix(data=landX_train, label=landY_train)
xgVal <- xgb.DMatrix(data=landX_val, label=landY_val)

Fit the first model.

Rpart –> recursive partitioning.

xg1 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',   # what you want to accomplish, like a cost function
  nrounds=1
)

If the outcome is continuous, the objective is 'reg:squarederror' (called 'reg:linear' in older xgboost versions).
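A minimal sketch of the continuous case, on made-up data (the simulated X and y here are assumptions for illustration, not the Manhattan data):

```r
library(xgboost)

# Hypothetical regression setup: simulated numeric outcome.
set.seed(1123)
X <- matrix(rnorm(500 * 3), ncol = 3)
y <- 2 * X[, 1] + rnorm(500)

xgReg <- xgb.train(
  data = xgb.DMatrix(data = X, label = y),
  objective = 'reg:squarederror',   # squared-error loss for continuous y
  nrounds = 10
)
```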

We have built the tree

xgb.plot.multi.trees(xg1, feature_names=colnames(landX_train))

You let the computer design the splits for you.

The maximum depth a tree can go is 60; the default is six (max_depth is a type of hyperparameter).

Is this better than a random forest? This is still one tree. It is understandable, but also highly variable. You want stable results. When trees are averaged, the ensemble does better across different parts of the room.

When you build a single tree, you use all the data. With a random forest you randomly sample the rows and randomly sample the columns. –> Bagging (Bootstrap Aggregating) –> Great, greatest of all time. Fast, black-boxed.

Then, boosting –> most famous for trees –> but it can be used for anything.

They boosted whatever they wanted to boost.

Boosting works for anything. You fit a model and see how well you did, then use that to adjust the weights for the next model; e.g. the next model is fitted on the residuals (errors) of the prior model, and so forth, each model stacking on top of the last. Model –> adjust weights –> model. After that comes a normalization step (as with category probabilities).

\[ \hat{y}_i^{(t)} = \sum_{k=1}^{t} f_k(x_i) = \hat{y}_i^{(t-1)} + f_t(x_i) \]

It does not matter what model it is, it is about which one predicts the best.
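The additive update can be hand-rolled with rpart stumps fitted to the running residuals. This is a toy sketch of the idea, not xgboost itself; the data and learning rate are made up:

```r
library(rpart)

# Each round fits a small tree to the residuals of the current prediction,
# then adds it on: y_hat^(t) = y_hat^(t-1) + eta * f_t(x).
set.seed(1)
df <- data.frame(x = runif(200, 0, 10))
df$y <- sin(df$x) + rnorm(200, sd = 0.2)

pred <- rep(0, nrow(df))
eta <- 0.3                       # shrinkage, like xgboost's learning rate
for (t in 1:50) {
  df$resid <- df$y - pred        # errors of the prior model
  fit <- rpart(resid ~ x, data = df, maxdepth = 2)
  pred <- pred + eta * predict(fit, df)
}
mean((df$y - pred)^2)            # training error shrinks round over round
```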

xg2 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',
  nrounds=1,
  eval_metric='logloss', # how right were you, or how wrong were you
  watchlist=list(train=xgTrain)
)
[1] train-logloss:0.584478 

\[ \text{logloss} = -\frac{1}{N}\sum_{i=1}^{N}\left[ y_i\log(p_i) + (1-y_i)\log(1-p_i) \right] \]
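The log loss above, written as an R function (a hypothetical helper, not from any package):

```r
# Mean log loss for binary y and predicted probabilities p.
logloss <- function(y, p) {
  -mean(y * log(p) + (1 - y) * log(1 - p))
}

logloss(c(1, 0, 1), c(0.9, 0.1, 0.8))   # confident and right: small loss
logloss(c(1, 0, 1), c(0.1, 0.9, 0.2))   # confident and wrong: large loss
```

Being confidently wrong is punished much harder than being mildly wrong, which is why logloss is a stricter metric than plain accuracy.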

Additive trees. In a random forest, each tree is trained independently.

Boosting stacks trees on top of one another.

Instead of one tree, let's build 100 trees. The logloss will go down, just by boosting.

xg3 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',
  nrounds=100,
  eval_metric='logloss', # how right were you, or how wrong were you
  watchlist=list(train=xgTrain)
)
[1] train-logloss:0.584478 
[2] train-logloss:0.523565 
[3] train-logloss:0.482857 
[4] train-logloss:0.455946 
[5] train-logloss:0.434840 
[6] train-logloss:0.421669 
[7] train-logloss:0.409734 
[8] train-logloss:0.401861 
[9] train-logloss:0.395607 
[10]    train-logloss:0.387369 
[11]    train-logloss:0.382291 
[12]    train-logloss:0.377313 
[13]    train-logloss:0.374346 
[14]    train-logloss:0.365464 
[15]    train-logloss:0.362735 
[16]    train-logloss:0.356429 
[17]    train-logloss:0.354169 
[18]    train-logloss:0.350926 
[19]    train-logloss:0.348293 
[20]    train-logloss:0.346368 
[21]    train-logloss:0.342958 
[22]    train-logloss:0.340219 
[23]    train-logloss:0.338344 
[24]    train-logloss:0.336832 
[25]    train-logloss:0.336059 
[26]    train-logloss:0.333087 
[27]    train-logloss:0.329905 
[28]    train-logloss:0.328826 
[29]    train-logloss:0.325942 
[30]    train-logloss:0.324408 
[31]    train-logloss:0.323443 
[32]    train-logloss:0.317896 
[33]    train-logloss:0.313644 
[34]    train-logloss:0.311702 
[35]    train-logloss:0.310234 
[36]    train-logloss:0.309521 
[37]    train-logloss:0.307625 
[38]    train-logloss:0.305570 
[39]    train-logloss:0.302278 
[40]    train-logloss:0.300768 
[41]    train-logloss:0.299990 
[42]    train-logloss:0.299424 
[43]    train-logloss:0.296827 
[44]    train-logloss:0.295968 
[45]    train-logloss:0.294801 
[46]    train-logloss:0.294090 
[47]    train-logloss:0.292859 
[48]    train-logloss:0.292619 
[49]    train-logloss:0.291970 
[50]    train-logloss:0.290782 
[51]    train-logloss:0.288655 
[52]    train-logloss:0.287187 
[53]    train-logloss:0.285326 
[54]    train-logloss:0.282312 
[55]    train-logloss:0.281620 
[56]    train-logloss:0.280646 
[57]    train-logloss:0.279199 
[58]    train-logloss:0.277831 
[59]    train-logloss:0.274718 
[60]    train-logloss:0.273355 
[61]    train-logloss:0.271651 
[62]    train-logloss:0.269864 
[63]    train-logloss:0.268158 
[64]    train-logloss:0.266182 
[65]    train-logloss:0.265149 
[66]    train-logloss:0.264410 
[67]    train-logloss:0.264073 
[68]    train-logloss:0.263613 
[69]    train-logloss:0.262498 
[70]    train-logloss:0.260425 
[71]    train-logloss:0.259115 
[72]    train-logloss:0.258620 
[73]    train-logloss:0.257172 
[74]    train-logloss:0.255226 
[75]    train-logloss:0.254464 
[76]    train-logloss:0.254263 
[77]    train-logloss:0.253777 
[78]    train-logloss:0.253593 
[79]    train-logloss:0.253233 
[80]    train-logloss:0.252334 
[81]    train-logloss:0.250384 
[82]    train-logloss:0.249105 
[83]    train-logloss:0.248248 
[84]    train-logloss:0.246005 
[85]    train-logloss:0.244575 
[86]    train-logloss:0.243416 
[87]    train-logloss:0.241841 
[88]    train-logloss:0.241356 
[89]    train-logloss:0.240410 
[90]    train-logloss:0.239526 
[91]    train-logloss:0.239148 
[92]    train-logloss:0.238954 
[93]    train-logloss:0.237618 
[94]    train-logloss:0.237368 
[95]    train-logloss:0.237292 
[96]    train-logloss:0.237214 
[97]    train-logloss:0.237133 
[98]    train-logloss:0.235804 
[99]    train-logloss:0.235228 
[100]   train-logloss:0.234987 
xg4 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',
  nrounds=300,
  eval_metric='logloss', # how right were you, or how wrong were you
  watchlist=list(train=xgTrain)
)

You can keep boosting forever, but it might be overfitting: this is measured on the training data, and performance always improves on the training data.

–> More depth (more leaves) means a greater chance of overfitting.

Validate on held-out data.

xg5 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',
  nrounds=300,
  eval_matric='logloss', # note: misspelled eval_metric, so xgboost silently falls back to the default 'error' metric seen below
  watchlist=list(train=xgTrain, validate=xgVal),
  early_stopping_rounds = 70
)
[1] train-error:0.201085    validate-error:0.208915 
Multiple eval metrics are present. Will use validate_error for early stopping.
Will train until validate_error hasn't improved in 70 rounds.

[2] train-error:0.193984    validate-error:0.198628 
[3] train-error:0.190543    validate-error:0.198873 
[4] train-error:0.188713    validate-error:0.193485 
[5] train-error:0.179566    validate-error:0.186627 
[6] train-error:0.177612    validate-error:0.187852 
[7] train-error:0.174326    validate-error:0.187607 
[8] train-error:0.171938    validate-error:0.181974 
[9] train-error:0.171504    validate-error:0.182954 
[10]    train-error:0.166016    validate-error:0.181729 
[11]    train-error:0.164651    validate-error:0.181729 
[12]    train-error:0.162419    validate-error:0.178790 
[13]    train-error:0.161488    validate-error:0.178300 
[14]    train-error:0.156992    validate-error:0.175361 
[15]    train-error:0.155938    validate-error:0.174626 
[16]    train-error:0.152837    validate-error:0.173157 
[17]    train-error:0.151969    validate-error:0.171198 
[18]    train-error:0.150884    validate-error:0.170708 
[19]    train-error:0.149984    validate-error:0.169973 
[20]    train-error:0.149395    validate-error:0.169728 
[21]    train-error:0.147411    validate-error:0.170218 
[22]    train-error:0.146171    validate-error:0.168748 
[23]    train-error:0.145302    validate-error:0.167769 
[24]    train-error:0.144248    validate-error:0.167034 
[25]    train-error:0.143938    validate-error:0.167769 
[26]    train-error:0.142853    validate-error:0.166299 
[27]    train-error:0.141209    validate-error:0.164095 
[28]    train-error:0.140682    validate-error:0.163605 
[29]    train-error:0.139225    validate-error:0.162381 
[30]    train-error:0.138202    validate-error:0.162626 
[31]    train-error:0.137674    validate-error:0.162136 
[32]    train-error:0.134326    validate-error:0.159197 
[33]    train-error:0.132527    validate-error:0.159931 
[34]    train-error:0.131659    validate-error:0.158707 
[35]    train-error:0.130667    validate-error:0.158462 
[36]    train-error:0.130202    validate-error:0.158707 
[37]    train-error:0.129488    validate-error:0.159442 
[38]    train-error:0.129147    validate-error:0.157972 
[39]    train-error:0.127535    validate-error:0.158952 
[40]    train-error:0.126357    validate-error:0.158707 
[41]    train-error:0.126264    validate-error:0.158217 
[42]    train-error:0.125705    validate-error:0.159197 
[43]    train-error:0.123969    validate-error:0.157972 
[44]    train-error:0.123566    validate-error:0.157727 
[45]    train-error:0.122512    validate-error:0.156013 
[46]    train-error:0.122357    validate-error:0.156013 
[47]    train-error:0.121519    validate-error:0.155768 
[48]    train-error:0.121426    validate-error:0.155768 
[49]    train-error:0.121240    validate-error:0.154543 
[50]    train-error:0.120992    validate-error:0.153808 
[51]    train-error:0.119876    validate-error:0.152094 
[52]    train-error:0.118853    validate-error:0.153564 
[53]    train-error:0.118264    validate-error:0.151359 
[54]    train-error:0.116124    validate-error:0.151604 
[55]    train-error:0.115814    validate-error:0.151604 
[56]    train-error:0.115132    validate-error:0.150869 
[57]    train-error:0.114388    validate-error:0.151114 
[58]    train-error:0.113395    validate-error:0.151604 
[59]    train-error:0.112341    validate-error:0.150869 
[60]    train-error:0.111628    validate-error:0.149890 
[61]    train-error:0.110822    validate-error:0.150380 
[62]    train-error:0.110388    validate-error:0.150869 
[63]    train-error:0.109457    validate-error:0.151114 
[64]    train-error:0.108248    validate-error:0.149155 
[65]    train-error:0.108093    validate-error:0.148910 
[66]    train-error:0.107504    validate-error:0.149645 
[67]    train-error:0.107132    validate-error:0.149645 
[68]    train-error:0.106791    validate-error:0.149155 
[69]    train-error:0.106295    validate-error:0.148175 
[70]    train-error:0.105364    validate-error:0.148665 
[71]    train-error:0.104744    validate-error:0.148665 
[72]    train-error:0.104651    validate-error:0.148175 
[73]    train-error:0.103659    validate-error:0.148910 
[74]    train-error:0.102667    validate-error:0.148420 
[75]    train-error:0.101953    validate-error:0.148910 
[76]    train-error:0.101984    validate-error:0.149155 
[77]    train-error:0.101736    validate-error:0.148910 
[78]    train-error:0.101488    validate-error:0.148910 
[79]    train-error:0.101240    validate-error:0.148910 
[80]    train-error:0.100744    validate-error:0.148665 
[81]    train-error:0.099473    validate-error:0.147441 
[82]    train-error:0.098760    validate-error:0.147196 
[83]    train-error:0.098109    validate-error:0.147196 
[84]    train-error:0.096713    validate-error:0.147441 
[85]    train-error:0.095969    validate-error:0.149155 
[86]    train-error:0.095287    validate-error:0.148665 
[87]    train-error:0.094636    validate-error:0.147930 
[88]    train-error:0.094326    validate-error:0.147686 
[89]    train-error:0.093767    validate-error:0.148175 
[90]    train-error:0.092868    validate-error:0.148420 
[91]    train-error:0.092713    validate-error:0.148910 
[92]    train-error:0.092589    validate-error:0.148665 
[93]    train-error:0.091008    validate-error:0.148420 
[94]    train-error:0.090729    validate-error:0.148665 
[95]    train-error:0.090884    validate-error:0.148665 
[96]    train-error:0.091070    validate-error:0.148175 
[97]    train-error:0.091070    validate-error:0.148420 
[98]    train-error:0.089984    validate-error:0.147441 
[99]    train-error:0.089953    validate-error:0.147441 
[100]   train-error:0.089984    validate-error:0.148175 
[101]   train-error:0.089178    validate-error:0.147686 
[102]   train-error:0.088868    validate-error:0.147930 
[103]   train-error:0.088341    validate-error:0.147686 
[104]   train-error:0.087907    validate-error:0.147441 
[105]   train-error:0.087504    validate-error:0.147686 
[106]   train-error:0.086729    validate-error:0.147441 
[107]   train-error:0.086171    validate-error:0.147930 
[108]   train-error:0.085643    validate-error:0.146461 
[109]   train-error:0.084620    validate-error:0.145971 
[110]   train-error:0.084651    validate-error:0.145726 
[111]   train-error:0.084155    validate-error:0.146951 
[112]   train-error:0.083814    validate-error:0.147196 
[113]   train-error:0.083256    validate-error:0.148175 
[114]   train-error:0.082884    validate-error:0.148175 
[115]   train-error:0.082016    validate-error:0.147441 
[116]   train-error:0.081891    validate-error:0.147441 
[117]   train-error:0.081085    validate-error:0.146461 
[118]   train-error:0.080589    validate-error:0.147196 
[119]   train-error:0.080279    validate-error:0.146216 
[120]   train-error:0.079876    validate-error:0.145971 
[121]   train-error:0.079349    validate-error:0.145971 
[122]   train-error:0.078915    validate-error:0.145726 
[123]   train-error:0.078791    validate-error:0.145726 
[124]   train-error:0.078078    validate-error:0.147686 
[125]   train-error:0.077364    validate-error:0.148910 
[126]   train-error:0.076341    validate-error:0.147441 
[127]   train-error:0.076062    validate-error:0.147686 
[128]   train-error:0.075101    validate-error:0.146951 
[129]   train-error:0.074822    validate-error:0.146951 
[130]   train-error:0.074171    validate-error:0.146706 
[131]   train-error:0.073922    validate-error:0.148175 
[132]   train-error:0.073488    validate-error:0.148175 
[133]   train-error:0.073457    validate-error:0.148420 
[134]   train-error:0.073550    validate-error:0.148665 
[135]   train-error:0.072713    validate-error:0.146706 
[136]   train-error:0.072620    validate-error:0.146951 
[137]   train-error:0.072062    validate-error:0.148175 
[138]   train-error:0.072000    validate-error:0.148420 
[139]   train-error:0.072000    validate-error:0.147686 
[140]   train-error:0.071380    validate-error:0.148420 
[141]   train-error:0.071318    validate-error:0.148175 
[142]   train-error:0.070729    validate-error:0.148665 
[143]   train-error:0.070419    validate-error:0.146461 
[144]   train-error:0.070109    validate-error:0.146461 
[145]   train-error:0.069891    validate-error:0.147441 
[146]   train-error:0.069581    validate-error:0.146951 
[147]   train-error:0.069612    validate-error:0.146951 
[148]   train-error:0.069271    validate-error:0.147930 
[149]   train-error:0.068868    validate-error:0.148420 
[150]   train-error:0.068527    validate-error:0.148665 
[151]   train-error:0.068341    validate-error:0.148665 
[152]   train-error:0.068155    validate-error:0.149155 
[153]   train-error:0.067628    validate-error:0.148910 
[154]   train-error:0.067473    validate-error:0.148910 
[155]   train-error:0.066791    validate-error:0.149155 
[156]   train-error:0.066760    validate-error:0.149155 
[157]   train-error:0.066512    validate-error:0.149400 
[158]   train-error:0.066109    validate-error:0.150380 
[159]   train-error:0.066047    validate-error:0.149400 
[160]   train-error:0.065767    validate-error:0.149400 
[161]   train-error:0.065550    validate-error:0.148910 
[162]   train-error:0.065364    validate-error:0.148910 
[163]   train-error:0.064992    validate-error:0.148910 
[164]   train-error:0.064186    validate-error:0.148420 
[165]   train-error:0.064093    validate-error:0.149155 
[166]   train-error:0.063597    validate-error:0.150869 
[167]   train-error:0.063380    validate-error:0.149890 
[168]   train-error:0.063287    validate-error:0.149890 
[169]   train-error:0.062884    validate-error:0.149400 
[170]   train-error:0.062853    validate-error:0.150135 
[171]   train-error:0.061984    validate-error:0.150380 
[172]   train-error:0.061705    validate-error:0.150625 
[173]   train-error:0.061829    validate-error:0.150380 
[174]   train-error:0.061612    validate-error:0.150380 
[175]   train-error:0.061426    validate-error:0.150135 
[176]   train-error:0.060527    validate-error:0.148420 
[177]   train-error:0.060031    validate-error:0.147686 
[178]   train-error:0.059876    validate-error:0.147196 
[179]   train-error:0.059566    validate-error:0.147441 
[180]   train-error:0.059442    validate-error:0.147930 
Stopping. Best iteration:
[110]   train-error:0.084651    validate-error:0.145726

Dygraph


dygraph(xg5$evaluation_log)

Tell xgboost to stop trying if it hasn't improved for a while.

xg6 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',
  nrounds=300,
  eval_matric='logloss', # note: misspelled eval_metric, so xgboost falls back to the default 'error' metric
  watchlist=list(train=xgTrain, validate=xgVal),
  early_stopping_rounds = 70 # stop if it doesn't get any better after 70 rounds
)
xg6$best_iteration
xg6$best_score
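With the early-stopped model in hand, the test set (untouched so far) gives an honest score. This sketch continues from the objects built above; for `binary:logistic` the predictions are probabilities, so threshold at 0.5 for a class label:

```r
# predict() on an xgb.Booster returns probabilities for binary:logistic
testPred <- predict(xg6, newdata = landX_test)
testClass <- as.integer(testPred > 0.5)
mean(testClass == landY_test)   # share of test rows classified correctly
```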

What is the proper depth? Too deep risks overfitting; too shallow gives not enough coverage.

xg7 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',
  nrounds=300,
  eval_matric='logloss', # note: misspelled eval_metric, so the default 'error' metric is used
  watchlist=list(train=xgTrain, validate=xgVal),
  max_depth=8
)
[1] train-error:0.178264    validate-error:0.198139 
[2] train-error:0.166915    validate-error:0.186627 
[3] train-error:0.160372    validate-error:0.183688 
[4] train-error:0.155039    validate-error:0.177076 
[5] train-error:0.150047    validate-error:0.174382 
[6] train-error:0.147318    validate-error:0.171932 
[7] train-error:0.145364    validate-error:0.171687 
[8] train-error:0.142574    validate-error:0.171198 
[9] train-error:0.141333    validate-error:0.168504 
[10]    train-error:0.138605    validate-error:0.168014 
[11]    train-error:0.136279    validate-error:0.164585 
[12]    train-error:0.135907    validate-error:0.164585 
[13]    train-error:0.132186    validate-error:0.161891 
[14]    train-error:0.131070    validate-error:0.161646 
[15]    train-error:0.130419    validate-error:0.161891 
[16]    train-error:0.128434    validate-error:0.160666 
[17]    train-error:0.127287    validate-error:0.159442 
[18]    train-error:0.127008    validate-error:0.159687 
[19]    train-error:0.126698    validate-error:0.159442 
[20]    train-error:0.122915    validate-error:0.157972 
[21]    train-error:0.117643    validate-error:0.155768 
[22]    train-error:0.117178    validate-error:0.155278 
[23]    train-error:0.114760    validate-error:0.154788 
[24]    train-error:0.112279    validate-error:0.152584 
[25]    train-error:0.111969    validate-error:0.152339 
[26]    train-error:0.108930    validate-error:0.149645 
[27]    train-error:0.108403    validate-error:0.149400 
[28]    train-error:0.106357    validate-error:0.149155 
[29]    train-error:0.105612    validate-error:0.149890 
[30]    train-error:0.102915    validate-error:0.151114 
[31]    train-error:0.101891    validate-error:0.151849 
[32]    train-error:0.101581    validate-error:0.152339 
[33]    train-error:0.096961    validate-error:0.149645 
[34]    train-error:0.093829    validate-error:0.148665 
[35]    train-error:0.093395    validate-error:0.148420 
[36]    train-error:0.091783    validate-error:0.147441 
[37]    train-error:0.091442    validate-error:0.147441 
[38]    train-error:0.090822    validate-error:0.147686 
[39]    train-error:0.090016    validate-error:0.147930 
[40]    train-error:0.088620    validate-error:0.148175 
[41]    train-error:0.086791    validate-error:0.148910 
[42]    train-error:0.086729    validate-error:0.148665 
[43]    train-error:0.086140    validate-error:0.147930 
[44]    train-error:0.084000    validate-error:0.147196 
[45]    train-error:0.083225    validate-error:0.148175 
[46]    train-error:0.082760    validate-error:0.148175 
[47]    train-error:0.081767    validate-error:0.148175 
[48]    train-error:0.081488    validate-error:0.148665 
[49]    train-error:0.081054    validate-error:0.148175 
[50]    train-error:0.080310    validate-error:0.150135 
[51]    train-error:0.079349    validate-error:0.149155 
[52]    train-error:0.079163    validate-error:0.148420 
[53]    train-error:0.078419    validate-error:0.149155 
[54]    train-error:0.077333    validate-error:0.148665 
[55]    train-error:0.076744    validate-error:0.149400 
[56]    train-error:0.075752    validate-error:0.148665 
[57]    train-error:0.075194    validate-error:0.148420 
[58]    train-error:0.073550    validate-error:0.147930 
[59]    train-error:0.072744    validate-error:0.147686 
[60]    train-error:0.070729    validate-error:0.149155 
[61]    train-error:0.070109    validate-error:0.148665 
[62]    train-error:0.068775    validate-error:0.147686 
[63]    train-error:0.068093    validate-error:0.146951 
[64]    train-error:0.067349    validate-error:0.146951 
[65]    train-error:0.066729    validate-error:0.146216 
[66]    train-error:0.066233    validate-error:0.146706 
[67]    train-error:0.064682    validate-error:0.145481 
[68]    train-error:0.063628    validate-error:0.144502 
[69]    train-error:0.061333    validate-error:0.143522 
[70]    train-error:0.060434    validate-error:0.143032 
[71]    train-error:0.059721    validate-error:0.142052 
[72]    train-error:0.058698    validate-error:0.142787 
[73]    train-error:0.058419    validate-error:0.143032 
[74]    train-error:0.058264    validate-error:0.142787 
[75]    train-error:0.056372    validate-error:0.142542 
[76]    train-error:0.055132    validate-error:0.142297 
[77]    train-error:0.054822    validate-error:0.142297 
[78]    train-error:0.054977    validate-error:0.141563 
[79]    train-error:0.054202    validate-error:0.140828 
[80]    train-error:0.051907    validate-error:0.140828 
[81]    train-error:0.050326    validate-error:0.140093 
[82]    train-error:0.049705    validate-error:0.140093 
[83]    train-error:0.048713    validate-error:0.139358 
[84]    train-error:0.048155    validate-error:0.139113 
[85]    train-error:0.047628    validate-error:0.138868 
[86]    train-error:0.046977    validate-error:0.138624 
[87]    train-error:0.046419    validate-error:0.139113 
[88]    train-error:0.046264    validate-error:0.139358 
[89]    train-error:0.045147    validate-error:0.138868 
[90]    train-error:0.044620    validate-error:0.139358 
[91]    train-error:0.044465    validate-error:0.139848 
[92]    train-error:0.044496    validate-error:0.139848 
[93]    train-error:0.043907    validate-error:0.138868 
[94]    train-error:0.043597    validate-error:0.140093 
[95]    train-error:0.042667    validate-error:0.140828 
[96]    train-error:0.041395    validate-error:0.139848 
[97]    train-error:0.040899    validate-error:0.139113 
[98]    train-error:0.039752    validate-error:0.139603 
[99]    train-error:0.039287    validate-error:0.140338 
[100]   train-error:0.038729    validate-error:0.138868 
[101]   train-error:0.037767    validate-error:0.138134 
[102]   train-error:0.037550    validate-error:0.138624 
[103]   train-error:0.037333    validate-error:0.138868 
[104]   train-error:0.037333    validate-error:0.140828 
[105]   train-error:0.037147    validate-error:0.141073 
[106]   train-error:0.037147    validate-error:0.141073 
[107]   train-error:0.036837    validate-error:0.141563 
[108]   train-error:0.036341    validate-error:0.141563 
[109]   train-error:0.035442    validate-error:0.140828 
[110]   train-error:0.034078    validate-error:0.140338 
[111]   train-error:0.033643    validate-error:0.138868 
[112]   train-error:0.033023    validate-error:0.138868 
[113]   train-error:0.032961    validate-error:0.138868 
[114]   train-error:0.032868    validate-error:0.139113 
[115]   train-error:0.032868    validate-error:0.138868 
[116]   train-error:0.032868    validate-error:0.138868 
[117]   train-error:0.032527    validate-error:0.138624 
[118]   train-error:0.032465    validate-error:0.139113 
[119]   train-error:0.031814    validate-error:0.139113 
[120]   train-error:0.031566    validate-error:0.138379 
[121]   train-error:0.031411    validate-error:0.138624 
[122]   train-error:0.031318    validate-error:0.138624 
[123]   train-error:0.030729    validate-error:0.138134 
[124]   train-error:0.030357    validate-error:0.138134 
[125]   train-error:0.030295    validate-error:0.138624 
[126]   train-error:0.029457    validate-error:0.138134 
[127]   train-error:0.028713    validate-error:0.137644 
[128]   train-error:0.028651    validate-error:0.137399 
[129]   train-error:0.028713    validate-error:0.137399 
[130]   train-error:0.028124    validate-error:0.137644 
[131]   train-error:0.027597    validate-error:0.138868 
[132]   train-error:0.027225    validate-error:0.138134 
[133]   train-error:0.027163    validate-error:0.137889 
[134]   train-error:0.026357    validate-error:0.136664 
[135]   train-error:0.025984    validate-error:0.136664 
[136]   train-error:0.025922    validate-error:0.136909 
[137]   train-error:0.025922    validate-error:0.137154 
[138]   train-error:0.025612    validate-error:0.137399 
[139]   train-error:0.025457    validate-error:0.137399 
[140]   train-error:0.025333    validate-error:0.137154 
[141]   train-error:0.025209    validate-error:0.137644 
[142]   train-error:0.024620    validate-error:0.138134 
[143]   train-error:0.024372    validate-error:0.138379 
[144]   train-error:0.023628    validate-error:0.139113 
[145]   train-error:0.023411    validate-error:0.138868 
[146]   train-error:0.022853    validate-error:0.138868 
[147]   train-error:0.022636    validate-error:0.137644 
[148]   train-error:0.022016    validate-error:0.137154 
[149]   train-error:0.021705    validate-error:0.136419 
[150]   train-error:0.021519    validate-error:0.137154 
[151]   train-error:0.020806    validate-error:0.137399 
[152]   train-error:0.020744    validate-error:0.137644 
[153]   train-error:0.020310    validate-error:0.136664 
    ... (rounds 154-299 omitted: train error keeps falling while validate error hovers around 0.13-0.14)
[300]   train-error:0.002450    validate-error:0.133480 

Next, combine early stopping (stop if the validation score does not improve for 70 rounds) with shallower trees (max_depth = 3).

xg8 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',
  nrounds=300,
  eval_metric='logloss',
  watchlist=list(train=xgTrain, validate=xgVal),
  early_stopping_rounds = 70, # stop if the validation score has not improved in 70 rounds
  max_depth=3                 # shallower trees to reduce overfitting
)
[1] train-error:0.224651    validate-error:0.227774 
Multiple eval metrics are present. Will use validate_error for early stopping.
Will train until validate_error hasn't improved in 70 rounds.

[2] train-error:0.217178    validate-error:0.215283 
    ... (rounds 3-200 omitted: train and validate error both decline steadily)
[201]   train-error:0.136341    validate-error:0.163115 
xg7$best_score
xg8$best_score

Hyperparameters can be tuned with a grid search, although random search has become more popular than grid search.
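As a minimal base-R sketch of random search (the parameter names are real xgboost hyperparameters, but the ranges below are illustrative assumptions, not values from these notes), candidate settings are simply drawn at random; each row would then be passed to `xgb.train()`:

```r
# Draw random hyperparameter combinations instead of enumerating a full grid
set.seed(42)
n_draws <- 5
param_draws <- data.frame(
  max_depth        = sample(2:10, n_draws, replace = TRUE), # tree depth
  eta              = runif(n_draws, 0.01, 0.3),             # learning rate
  subsample        = runif(n_draws, 0.5, 1),                # row sampling
  colsample_bytree = runif(n_draws, 0.5, 1)                 # column sampling
)
param_draws
```

Fit one model per row on the training set, score each on the validation set, and keep the best-scoring combination.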

By combining row and column subsampling with many parallel trees per round, xgboost can approximate a random forest: a pseudo random forest, not a true random forest.

xg9 <- xgb.train(
  data=xgTrain,
  objective='binary:logistic',
  nrounds=10,
  eval_metric='logloss',
  watchlist=list(train=xgTrain, validate=xgVal),
  early_stopping_rounds = 70,
  max_depth=3,
  subsample=0.5, colsample_bytree=0.5, # for each tree, use only half the rows and half the columns
  num_parallel_tree=50 # grow 50 trees at a time, boosting 10 times
)
[1] train-error:0.241705    validate-error:0.241979 
Multiple eval metrics are present. Will use validate_error for early stopping.
Will train until validate_error hasn't improved in 70 rounds.

[2] train-error:0.241705    validate-error:0.241979 
[3] train-error:0.241705    validate-error:0.241979 
[4] train-error:0.241705    validate-error:0.241979 
[5] train-error:0.241705    validate-error:0.241979 
[6] train-error:0.241705    validate-error:0.241979 
[7] train-error:0.241705    validate-error:0.241979 
[8] train-error:0.241767    validate-error:0.241979 
[9] train-error:0.241767    validate-error:0.241979 
[10]    train-error:0.241736    validate-error:0.241979 

The nthread option uses multiple cores at a time. How can xgboost run in parallel? The search for the best split can be parallelised: candidate splits across features are evaluated simultaneously, and multiple searches can run at once. On a GPU this gives massive speedups.
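As a sketch of the nthread option, using the small agaricus dataset that ships with the xgboost package (not the Manhattan data used in these notes):

```r
library(xgboost)

# agaricus is a toy binary-classification dataset bundled with xgboost
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)

# same kind of boosting run as above, but the split search may use two cores
bst <- xgb.train(
  data = dtrain,
  objective = "binary:logistic",
  nrounds = 5,
  nthread = 2,
  verbose = 0
)

pred_ag <- predict(bst, dtrain) # one probability per training row
```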

XGboost is an amazing algorithm.

When the data is not audio, images, and so on, XGboost is usually the better choice; on tabular (vector) data it is also amazingly fast.

The G stands for gradient: eXtreme Gradient Boosting.

For regression trees, XGboost predicts the average of the training observations falling in each leaf.
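A toy base-R illustration (made-up numbers) of that leaf-averaging idea: with one split, each region predicts the mean of its training points, giving the piecewise-constant fit from the start of these notes.

```r
# One split at x = 6 creates two regions; each predicts the mean of its y's
x <- c(1, 2, 3, 10, 11, 12)
y <- c(1.0, 1.2, 0.8, 5.0, 5.2, 4.8)

split   <- 6
c_left  <- mean(y[x < split])   # mean of the left region
c_right <- mean(y[x >= split])  # mean of the right region

f_hat <- function(x_new) ifelse(x_new < split, c_left, c_right)
f_hat(c(2, 11))  # 1 5
```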

Create a variable importance plot

xgb.plot.importance(
  xgb.importance(xg7, feature_names = colnames(landX_train))
)

Predictions

Probability predictions, made on the test set so they line up with landY_test:

pred <- predict(xg9, landX_test)

Binary predictions, thresholding the probabilities at 0.5:

prediction <- as.numeric(pred > 0.5)

Measure performance

err <- mean(as.numeric(pred > 0.5) != landY_test)
print(paste("test-error=", err))
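A quick base-R sanity check of that error formula, with made-up probabilities and labels:

```r
# Four predicted probabilities against four true 0/1 labels;
# the third prediction crosses 0.5 but its true label is 0
pred_toy  <- c(0.9, 0.2, 0.7, 0.4)
label_toy <- c(1, 0, 0, 0)

err_toy <- mean(as.numeric(pred_toy > 0.5) != label_toy)
err_toy  # 0.25: one of four predictions is wrong
```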

References

https://cran.r-project.org/web/packages/xgboost/vignettes/xgboostPresentation.html
https://data.world/landeranalytics/training

---
title: "XGBOOST"
author: "Arie Twigt"
output: html_notebook
---
